472 query results in total.
11.
Two-Photon Lithography, thanks to its very high sub-diffraction resolution, has become the lithographic technique par excellence for applications requiring small feature sizes and complex 3D patterning. Despite this, the fabrication times required for extended structures remain much longer than those of competing techniques (UV mask lithography, nanoimprinting, etc.), and this low throughput prevents its wide adoption in industrial applications. Over the years, different solutions have been proposed to increase it, although their usage is difficult to generalize and may be limited depending on the specific application. A promising strategy to further increase the throughput of Two-Photon Lithography, opening a concrete window for its adoption in industry, lies in its combination with holography: in this way it is possible to generate dozens of foci from a single laser beam, thus parallelizing the fabrication of periodic structures, or to engineer the intensity distribution on the writing plane in a complex way, obtaining 3D microstructures with a single exposure. Here, the fundamental concepts behind high-speed Two-Photon Lithography and its combination with holography are discussed, and the recent literature exploiting such techniques is reviewed and contextualized according to the topic covered.
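The multi-focus idea described above can be sketched numerically: superposing the phases of several blazed gratings, one per target focus, and keeping only the argument of the sum yields a phase-only hologram that, displayed on a spatial light modulator, splits a single beam into several writing foci. This simple superposition recipe and all parameters below are illustrative assumptions, not a method from the reviewed papers.

```python
import cmath

def multifocus_phase(shape, foci):
    """Phase mask (radians) whose far field has one diffraction peak
    per spatial frequency (fx, fy) in `foci`."""
    h, w = shape
    mask = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # superpose one blazed grating per desired focus
            field = sum(cmath.exp(2j * cmath.pi * (fx * x + fy * y))
                        for fx, fy in foci)
            mask[y][x] = cmath.phase(field)  # phase-only: discard amplitude
    return mask

# toy 8x8 mask producing two off-axis foci
mask = multifocus_phase((8, 8), [(0.125, 0.0), (0.0, 0.25)])
```

Discarding the amplitude is what makes the hologram displayable on a phase-only modulator, at the cost of some diffraction efficiency.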
12.
We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players’ costs, and Max, defined as the maximum cost per player, as a measure of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of \(2+\sqrt{5}\approx 4.24\) given in Christodoulou et al. (Proceedings of the 23rd International Symposium on Theoretical Aspects of Computer Science (STACS), LNCS, vol. 3884, pp. 349–360, Springer, Berlin, 2006) and the lower bound of 4 derived in Caragiannis et al. (Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP), LNCS, vol. 4051, pp. 311–322, Springer, Berlin, 2006) by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of \(\Theta(\sqrt[4]{n^{3}})\) (resp. \(\Theta(n\sqrt{n})\)), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.
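A one-round walk from the empty profile can be illustrated on a toy instance: players enter one at a time and greedily pick the resource minimizing their own cost given the loads created so far, with linear latency \(a_e \cdot x\) on each resource. The tiny two-resource instance below is a made-up example, not a construction from the paper.

```python
def one_round_walk(n_players, coeffs):
    """Players enter one at a time (empty start) and each best-responds:
    joining resource e with current load L costs coeffs[e] * (L + 1)."""
    load = [0] * len(coeffs)
    choice = []
    for _ in range(n_players):
        e = min(range(len(coeffs)), key=lambda e: coeffs[e] * (load[e] + 1))
        load[e] += 1
        choice.append(e)
    return choice, load

def social_sum(choice, load, coeffs):
    # Sum social function: total of the players' individual costs
    return sum(coeffs[e] * load[e] for e in choice)

choice, load = one_round_walk(4, [1, 2])   # resources with latencies x and 2x
print(choice, load, social_sum(choice, load, [1, 2]))
```

The approximation ratio studied in the paper compares this Sum value against the optimal assignment's Sum, over worst-case instances.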
13.
Natural scene categorization from images is a very useful task for automatic image analysis systems. In the literature, several methods addressing this issue have been proposed, with excellent results. Typically, features of several types are clustered so as to generate a vocabulary able to describe the considered image collection in a multi-faceted way. This vocabulary is formed by a discrete set of visual codewords whose co-occurrence and/or composition allows the scene category to be classified. A common drawback of these methods is that features are usually extracted from the whole image, disregarding whether they actually derive from the natural scene to be classified or from foreground objects possibly present in it, which are not peculiar to the scene. As perceptual studies suggest, objects present in an image are not useful for natural scene categorization and indeed introduce an important source of clutter, to a degree depending on their size.
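The vocabulary step described above can be sketched as standard bag-of-visual-words quantization: each local feature is assigned to its nearest codeword and the image is summarized by a codeword histogram. The centroids below are fixed toy values standing in for a clustering (e.g. k-means) result; this is a generic sketch, not the paper's specific pipeline.

```python
def quantize(feature, vocabulary):
    """Index of the nearest codeword (squared Euclidean distance)."""
    return min(range(len(vocabulary)),
               key=lambda k: sum((f - c) ** 2
                                 for f, c in zip(feature, vocabulary[k])))

def codeword_histogram(features, vocabulary):
    """Histogram of codeword occurrences: the image descriptor fed
    to a scene classifier."""
    hist = [0] * len(vocabulary)
    for f in features:
        hist[quantize(f, vocabulary)] += 1
    return hist

vocab = [(0.0, 0.0), (1.0, 1.0)]                 # two toy codewords
feats = [(0.1, 0.2), (0.9, 1.1), (0.0, 0.1)]     # toy local features
print(codeword_histogram(feats, vocab))
```

The drawback the abstract points out is visible here: the histogram counts every feature, whether it came from the scene background or from an irrelevant foreground object.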
14.
From Images to Shape Models for Object Detection
We present an object class detection approach which fully integrates the complementary strengths offered by shape matchers. Like an object detector, it can learn class models directly from images, and can localize novel instances in the presence of intra-class variations, clutter, and scale changes. Like a shape matcher, it finds the boundaries of objects, rather than just their bounding-boxes. This is achieved by a novel technique for learning a shape model of an object class given images of example instances. Furthermore, we also integrate Hough-style voting with a non-rigid point matching algorithm to localize the model in cluttered images. As demonstrated by an extensive evaluation, our method can localize object boundaries accurately and does not need segmented examples for training (only bounding-boxes).
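The Hough-style voting mentioned above can be sketched generically: each local match between a model part and an image feature votes for an object-center hypothesis, and the densest accumulator cell gives a candidate location for finer (e.g. non-rigid) matching. The matches and grid size below are toy values, not the paper's actual pipeline.

```python
from collections import Counter

def hough_votes(matches, cell=10):
    """Each match = ((feature_x, feature_y), (offset_x, offset_y));
    votes are cast into a coarse grid of `cell`-sized bins."""
    acc = Counter()
    for (fx, fy), (ox, oy) in matches:
        cx, cy = fx + ox, fy + oy          # predicted object center
        acc[(cx // cell, cy // cell)] += 1
    return acc

# three toy matches agreeing on one center, plus one outlier
matches = [((12, 30), (20, 5)), ((40, 18), (-8, 17)),
           ((55, 60), (-23, -25)), ((90, 90), (5, 5))]
acc = hough_votes(matches)
best_cell, votes = acc.most_common(1)[0]
print(best_cell, votes)
```

Coarse binning is what makes the vote robust to clutter: the outlier match lands in its own cell and never outweighs the consistent cluster.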
15.
This paper presents an algorithm for roadway path extraction and tracking and its implementation in a Field Programmable Gate Array (FPGA) device. The implementation is particularly suitable for use as a core component of a Lane Departure Warning (LDW) system, which requires high-performance digital image processing as well as low-cost semiconductor devices, appropriate for the high volume production of the automotive market. The FPGA technology proved to be a proper platform to meet these two contrasting requirements. The proposed algorithm is specifically designed to be completely embedded in FPGA hardware to process wide VGA resolution video sequences at 30 frames per second. The main contributions of this work lie in (i) the proper selection, customization and integration of the main functions for road extraction and tracking to cope with the addressed application, and (ii) the subsequent FPGA hardware implementation as a modular architecture of specialized blocks. Experiments on real road scenario video sequences running on the FPGA device illustrate the good performance of the proposed system prototype and its ability to adapt to varying common roadway conditions, without the need for a per-installation calibration procedure.
16.
This paper presents a distributed architecture for the provision of seamless and responsive mobile multimedia services. This architecture allows its user applications to concurrently use all the wireless network interface cards (NICs) a mobile terminal is equipped with. In particular, as mobile multimedia services are usually implemented using the UDP protocol, our architecture enables the transmission of each UDP datagram through the “most suitable” (e.g. most responsive, least loaded) NIC among those available at the time a datagram is transmitted. We term this operating mode of our architecture Always Best Packet Switching (ABPS). ABPS enables the use of policies for load balancing and recovery purposes. In essence, the architecture we propose consists of the following two principal components: (i) a fixed proxy server, which acts as a relay for the mobile node and enables communications from/to this node regardless of possible firewalls and NAT systems, and (ii) a proxy client running in the mobile node responsible for maintaining a multi-path tunnel, constructed out of all the node's NICs, with the above mentioned fixed proxy server. We show how the architecture supports multimedia applications based on the SIP and RTP/RTCP protocols, and avoids the typical delays introduced by the two-way message/response handshake of the SIP signaling protocol. Experimental results originated from the implementation of a VoIP application on top of the architecture we propose show the effectiveness of our approach.
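The per-datagram selection at the heart of ABPS can be sketched as a scoring policy over the available interfaces. The scoring rule below (smoothed RTT penalized by recent loss) and all names are illustrative assumptions, not the paper's actual policy.

```python
def pick_nic(nics):
    """Return the name of the NIC with the best (lowest) score,
    blending smoothed RTT with the recent loss rate."""
    def score(stats):
        return stats["srtt_ms"] * (1.0 + 10.0 * stats["loss"])
    return min(nics, key=lambda name: score(nics[name]))

# hypothetical per-NIC statistics maintained by the proxy client
nics = {
    "wlan0": {"srtt_ms": 40.0, "loss": 0.05},   # Wi-Fi: fast but lossy
    "wwan0": {"srtt_ms": 90.0, "loss": 0.00},   # cellular: slower, clean
}
print(pick_nic(nics))   # chosen NIC for the next UDP datagram
```

Because the decision is re-evaluated per datagram rather than per flow, the tunnel can shift traffic between interfaces as soon as one degrades.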
17.
Motivated by the increasing interest of the Computer Science community in the study and understanding of non-cooperative systems, we present a novel model for formalizing the rational behavior of agents with a more farsighted view of the consequences of their actions. This approach yields a framework creating new equilibria, which we call Second Order equilibria, starting from a ground set of traditional ones. By applying our approach to pure Nash equilibria, we define the set of Second Order pure Nash equilibria and present their applications to the Prisoner’s Dilemma game, to an instance of Braess’s Paradox in the Wardrop model and to the KP model with identical machines.
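The "ground set of traditional equilibria" the paragraph starts from can be computed directly on the Prisoner's Dilemma by enumerating strategy profiles and checking that no player has a profitable unilateral deviation. The cost values below are a standard illustrative choice (lower is better), not taken from the paper.

```python
from itertools import product

# cost[profile] = (row player's cost, column player's cost)
# strategies: 0 = cooperate, 1 = defect
cost = {
    (0, 0): (1, 1),   # both cooperate
    (0, 1): (3, 0),   # row cooperates, column defects
    (1, 0): (0, 3),
    (1, 1): (2, 2),   # both defect
}

def pure_nash(cost, n_strategies=2):
    """All profiles where no player can lower their cost by deviating alone."""
    eq = []
    for profile in product(range(n_strategies), repeat=2):
        stable = True
        for player in range(2):
            for dev in range(n_strategies):
                alt = list(profile)
                alt[player] = dev
                if cost[tuple(alt)][player] < cost[profile][player]:
                    stable = False
        if stable:
            eq.append(profile)
    return eq

print(pure_nash(cost))   # mutual defection is the unique pure Nash equilibrium
```

The Second Order notion proposed in the paper then builds new, more farsighted equilibria on top of a ground set like this one.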
18.
The Clean Seas project focused on the role that existing Earth observing satellites might play in monitoring marine pollution. Results are presented here from August 1997, for the North Sea test site, using sea surface temperature (SST), colour and synthetic aperture radar (SAR) images in conjunction with a hydrodynamic model. There was good correlation between data sources, e.g. between SST and ERS-2 SAR images. Both datasets showed the development of fine plume structures close to the Rhine outflow, apparently associated with it, and possibly caused by tidal pulsing of the Rhine Plume.

The model reproduced general temperature and sediment distributions well, but did not reproduce fine structures. Model sediment distribution patterns were verified using ‘chlorophyll concentration’ data from colour sensors, representative of sediment concentration in turbid water. In conjunction with the visible channels of the Advanced Very High Resolution Radiometer and Along-Track Scanning Radiometer, they give an uncalibrated measure of the sediment load. The model gives a more complete picture of the dispersion of the Rhine Plume over time than is evident from the remotely sensed data alone.
19.
The interest of space observations of ocean colour for determining variations in phytoplankton distribution and for deriving primary production (via models) has been largely demonstrated by the Coastal Zone Color Scanner (CZCS), which operated from 1978 to 1986. The capabilities of this pioneer sensor, however, were limited both in spectral resolution and radiometric accuracy. The next generation of ocean colour sensors will benefit from major improvements. The Medium Resolution Imaging Spectrometer (MERIS), planned by the European Space Agency (ESA) for the Envisat platform, has been designed to measure radiances in 15 visible and infrared channels. Three infrared channels will allow aerosol characterization, and therefore accurate atmospheric corrections, to be performed for each pixel. For the retrieval of marine parameters, nine channels between 410 and 705 nm will be available (as opposed to only four with the CZCS). In coastal waters this should, in principle, allow a separate quantification of different substances (phytoplankton, mineral particles, yellow substance) to be performed. In open ocean waters optically dominated by phytoplankton and their associated detrital matter, the basic information (i.e. the concentration of phytoplanktonic pigments) will be retrieved with improved accuracy due to the increased radiometric performances of MERIS. The adoption of multi-wavelength algorithms could also lead to additional information concerning auxiliary pigments and taxonomic groups. Finally, MERIS will be one of the first sensors to allow measurements of Sun-induced chlorophyll a in vivo fluorescence, which could provide a complementary approach for the assessment of phytoplankton abundance. The development of these next-generation algorithms, however, requires a number of fundamental studies.
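The kind of multi-wavelength retrieval the paragraph refers to can be illustrated with the classic blue/green band-ratio estimate of chlorophyll concentration: phytoplankton absorb blue light, so a lower blue-to-green reflectance ratio implies more pigment. The power-law coefficients below are made-up placeholders, not a calibrated MERIS algorithm.

```python
def chl_band_ratio(r_blue, r_green, a=0.3, b=-2.0):
    """Chlorophyll proxy (mg m^-3) from a reflectance band ratio:
    chl = a * (R_blue / R_green) ** b, with illustrative a, b."""
    return a * (r_blue / r_green) ** b

# clearer water reflects relatively more blue -> lower chlorophyll estimate
clear = chl_band_ratio(0.02, 0.01)
turbid = chl_band_ratio(0.01, 0.01)
print(clear, turbid)
```

With nine visible channels instead of the CZCS's four, MERIS-class sensors can go beyond a single ratio of this kind and fit several substances at once.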
20.
In order to overcome the limitations of defining industrial specializations in digital industries through SIC codes, this paper suggests measuring the specializations and competences of these industries on the basis of the degree of digital technologies present in the products and services supplied. Metadata from CrunchBase are employed as proxies of firms' specializations and competences, defined as the fields of activity in which firms are involved. Applying a network analysis, these specializations and competences are linked to the recognition of emerging digital technologies and the strongest combinations of products and services. We tested the proposed methodology on London, a leading centre for the digital economy.
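The network-analysis step can be sketched as a co-occurrence graph: firms' category tags (CrunchBase-style metadata) become nodes, and edge weights count how often two tags appear together on the same firm, so the heaviest edges indicate the strongest combinations of specializations. The tag names and firm records below are made up for illustration.

```python
from collections import Counter
from itertools import combinations

# hypothetical firms, each described by a set of category tags
firms = [
    {"fintech", "machine-learning"},
    {"fintech", "blockchain"},
    {"fintech", "machine-learning", "analytics"},
]

# edge weight = number of firms on which the two tags co-occur
edges = Counter()
for tags in firms:
    for a, b in combinations(sorted(tags), 2):
        edges[(a, b)] += 1

print(edges.most_common(1))   # strongest combination of specializations
```

On real CrunchBase data the same counts would feed a full network analysis (centrality, community detection) rather than a simple top-1 lookup.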

Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23

京公网安备 11010802026262号